perm filename WINOGR[W83,JMC] blob sn#697414 filedate 1983-01-28 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00002 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002		26-Jan-83 JMC	27-Jan-83 RJT
C00036 ENDMK
C⊗;
	26-Jan-83 JMC	27-Jan-83 RJT

This file is a translation of an interview of Terry Winograd by
the French newspaper Le Monde.  It is taken from the file
MIT-AI:COMMON;WINO GRAD.  The original has Scribe decorations.

HOMO SAPIENS AND THE THINKING ROBOT

TRANSLATION

Le Monde Dimanche - 2 Janvier 1983


Just how far will the dialogue go between man and the new "intelligent"
machines he is creating?  This question is at the heart of the research
led by Terry Winograd, mathematician, computer scientist and professor
at Stanford University in California.

Computer science, teleprocessing and robotics permeate our lives and have
important political and economic impacts.  Will we become an automated and
nuclear society under the influence of an increasingly efficient technology? Or,
on the contrary, will we learn how to create "thinking robots" capable of
self-supervision thanks to their "artificial intelligence", and therefore
capable of cooperating with their human creators?

This questioning lies at the center of years of research conducted jointly
by two scientists at Stanford University in California: one, Fernando Flores,
former economics minister under Allende, is a philosopher; the other, Professor
Terry Winograd, is a mathematician and computer scientist who explains the nature
of his own work in the following article.  His program, "SHRDLU", conceived in
1971 at MIT, was an innovation in computer science.

Q-  The robot "machina sapiens" seems to have become a myth.  What will its
relation be in the future with its relative the "homo sapiens"?
[The language of this question is typical of the French general purpose
intellectual.  It conceals vagueness and lack of definite ideas.  In
this case, AI is regarded as a species, but if challenged, the questioner
would retreat into talking about metaphor.  It is part of the literary
convention, however, that his vagueness not be challenged].

A-  The difference between myth and reality is that myth belongs to the domain
of science fiction, like the robot "HAL" in Kubrick's film 2001, and
that includes the "thinking robots".  In reality, computers in
various forms are integral parts of our daily lives, not because they think
nor because they are "sapiens", but because they are capable of storing,
manipulating and transmitting human information.  If we analyze the ties
that already exist between man and the robot, we see that they resemble those
existing between a person and his tool, between the master and his servant.
[This answer is somewhat evasive, since the questioner (however vague and
fancy his language) wasn't talking about ordinary present computer programs].

Q-  It seems that we could never make a machine as perfect as the human brain.
Can we then deduce that the robots will never be able to challenge man's
mastery of the universe?
[It seems to whom?  The "we" who try - or someone else?  Again we have
robots as a species].

A-  This reflects a certain pretension on man's part, because he has not
"mastered" the universe; at the very most he partially controls his tiny
sphere.  Moreover, the computer as we have envisioned and constructed it
is only one element, no more independent than our automobile or our
television; it is still a tool that lacks control of itself.
[A pretension on the part of the questioner and an evasion on the
part of Winograd].

Q-  Nevertheless, will the robot be able to revise its own program
automatically and progressively during execution, in order to handle
unforeseen situations?
[The questioner persists.]

A-  There have been numerous attempts to create software which does not
follow a specific, preestablished program, but rather develops its own
program as needed from an initial open-ended device.
We are just at the beginning of this research: the devices operating
according to this model are not even as intelligent as bacteria.  However,
it is possible for a program to check itself: the difficulty lies in
creating a device whose revisions would be
relevant, meaning machines which would not modify their original
program in any random manner, since this would be of no value.
[The answer is vague, and the comparison with bacteria inapt, since
the reader (like me) will have no opinion that bacteria have any
intelligence, but will suppose Terry knows about some.  I bet he
doesn't.]

    Thus, we must know whether it would be possible to design an initial device
with great adaptability to all foreseeable situations, one which would allow
it to perform the necessary functional changes based on the original executive
program.  Nonetheless, the computer will never be capable of handling all the
unforeseen situations, because it does not possess the spontaneous, adaptive
aptitude of the brain.
[This is where I fundamentally disagree with Terry.  What is "spontaneous,
adaptive aptitude"?  Anything may appear spontaneous to someone who doesn't
understand its mechanism.  The point is to foresee situations in sufficient
generality, so that the program will react to situations not foreseen in
detail].

Q-  Can we establish a correlation between the intelligence and the imagination
of the computer scientist, and the performance capacities of his robot?
[A reasonable question].

A- We have just discussed the possibilities of self-organizing systems.
[No real discussion unless something was omitted].
But, when we examine the programs actually executed, we see that they
strongly reflect the intelligence and the inventiveness of their
creators.  It is fascinating to compare the various artificial
intelligence programs with the personalities of their creators: the creator
is reflected in his robot just as the writer is revealed in his book.
[He seems to have taken the idea from the questioner and repeats it
in other words, but without examples.]

THE COMPUTER SCIENTISTS ARE POETS-

Q-  The computer scientists, aren't they poets of sorts?
[A vague question].

A-  I think the best programmers have a bit of poetic blood in them.  A
computer program - like all artistic creations - comes closer to
perfection the more inspired its creator.
[An equally vague answer].

Q- You have expanded the writing of program logic by permitting robots
to solve the problems of syntax, semantics and logic simultaneously.
How does your robot 'SHRDLU' "hear", "understand" and "execute"
a command?
[The questioner has read something - perhaps Margaret Boden?]

A- The program I wrote was not really executed by a robot, but rather
simulated on a video screen: the commands were typed on a keyboard and
processed so that the result was either a written response
or a simulation of a series of actions.  So, the video screen showed
what would really have happened if an actual robot had executed these
same "orders".
[An evasion of discussion of how SHRDLU works].

Q-  How can "SHRDLU" execute his dual function: his preoccupation with "words"-
talking with you- and his preoccupation with "things"- manipulating the blocks
in his universe?
[The questioner persists, but is confused about where the problems arise].

A-  My program stresses the linguistic interaction rather than the physical
possibilities of the robot.  His double preoccupation, as you have correctly
pointed out, is the relation of the robot with the world of objects and
motions, and simultaneously, with the world of questions and orders from the
programmer.  The program was composed of an assortment of elements necessary
for language comprehension: a syntactic component handled the grammatical
structure of the sentence, a semantic component analyzed the meaning
of the words, and the last component was necessary to respond to questions or
to execute complex actions.  On the screen, we saw the drawing of many objects
piled on a table: pyramids, cubes, spheres ...  When the
programmer-analyst typed "Raise the red cube" on the computer console, we saw
an arm reach for the cube and lift it towards the top of the screen as if it
were a real robot.
[An answer for once.]

Q-  According to Descartes, "No man is so stupid that he cannot
express his own ideas".  Won't this be a key problem with robots?
[Descartes was wrong about this.  One's ideas are often quite difficult
to express. - Assuming the questioner hasn't misrepresented Descartes.]

A-  I am not sure whether the problem is the possibility of expressing the ideas
or that of having them.  The computer is capable of storing and manipulating
human ideas, but in one sense it has no more ideas than a book, so the obstacles
concerning computers' use of language become fundamental, since for them it
is bound up with their use of knowledge.
[Well it seems Terry doesn't believe in AI at all.]

Q-  Is this what Chomsky alluded to in saying that "the computer can achieve
the level of the 'verbal performance' of man, but never his 'theoretic
competence'"?

A-  If we take "theoretic competence" in its general significance, I would
agree that the computer can process language structures while it fails in
competence in the actual world.

    But Chomsky gives a more specific significance to "theoretic
competence" by distinguishing the capacity to recognize sentences - what
he calls "competence" - from the real capacity to use the language - what
he calls "performance".  And in this sense, the computer can achieve a
man's performance but not attain his theoretic competence.

THE IMPOSSIBILITY OF INNOVATION-

Q-  If learning for a robot only signifies the storing of more information,
couldn't it nonetheless acquire more elaborate learning processes?

A- A large part of the research in Artificial Intelligence treats
learning not as a process of storing information, but as a
reorganization.  The computer starts off with a basic store of knowledge,
and as information is collected, this knowledge is reorganized and
restructured more efficiently as needed.  Therefore, more elaborate
learning processes exist, but they are still hindered by the initial
structure of the machine.  What a computer can learn is predetermined in
a sense, because it already knows it, and it is difficult to imagine the
creation of a structure sufficiently general and open that it could give
rise to an innovative thought.
[No analysis of what innovation is.]

Q-  The finality is therefore included in the initial conditions?
[The French learn such jargon in high school.]

A-  Maybe not the exact forms of the final structure, but certainly the
"family" from which they can evolve.

Q-  In a normal conversation, the interlocutors grasp the "sense" due not
only to their hearing but also to an extra-linguistic comprehension.  Could
a robot ever achieve this?

A-  There is a linguistic theory which considers language primarily
in its syntactic form, with a logical and formal structure, like a mathematical
theorem.  In reality, this structure is just a small part of the language,
which simultaneously conveys a whole range of implicit meanings.
Because of this, the computer scientist must include the cultural
context in his programming.
[How about ordinary facts as SHRDLU did?  It's a bit fancy to call 
that a cultural context.]

Q-  Many of the meanings are also communicated by intonation and gestures...

A-  This domain of extra-linguistic communication has barely been touched
by the computer scientists.
[A missed opportunity to explain how much can be done without gestures.
After all 1001 words is worth more than a picture.]

Q-  If language describes not only the events but also the "manner" according
to the user's vision of the world, can a computer ever be capable of translation?

A-  There are two sides to your question.  On one hand, there is the fact that
language is an "action" and not a simple "representation": when I explain myself, I
don't only describe the state of the world; by virtue of my words the
world becomes something else.  This is particularly evident when it concerns
a promise such as "I will come tomorrow", since by this spoken promise I
modify the situation.  The computer, not being a member of our society, is
therefore not qualified to act on its word.
[Nevertheless, a computer might be capable of keeping promises.  Sorry,
capable of doing what it said it would do.]

    Translation, by contrast, does not raise this problem, because it is
basically the reformulation of one linguistic structure into
another.  Its major problem, concerning the comprehension of human
motivations, is knowing why we chose to express ourselves in one way
rather than another, rather than knowing what these words refer to.
[Evasion].

Q-  "Humanity only asks itself questions that it can answer", said Hegel.  Is
this why computers do not ask many questions?
[Again fanciness.  Anyway, Suppes's drill programs ask lots of
questions, even if they don't do it to get information.  The
LOTS first greeting of the quarter asks questions to which
it wants the answers.]

A- If computers "hold back" their questions, it is not solely because
they are incapable of responding, but because they do not have the
structure which would allow them to take the initiative to question.
Computers whose programs "know how" to ask questions first ask their
human interlocutor: in one sense, they would never know how to answer
themselves, and thus they give the initiative back to the human.  In my
opinion, the problem lies more in the absence of a conceptual structure
capable of motivating this "questioning".

Q-  Will it be possible to develop one someday?

A- It is possible to conceive of computer programs having a wide
range of basic instructions, so that they could formulate specific
questions.  Nevertheless, it seems improbable to me that the computer
could ever ask questions that were never foreseen by the programmer.
[This seems not very difficult.]

Q-  A type of question that might emerge from the right hemisphere of the brain?

A- There is certainly nothing in computer science which actually
corresponds to that.
[The right hemisphere is now fashionable, but one can be sure
that the fashion has little to do with the results of Sperry's
research.]

Q- Why did you choose to collaborate with Fernando Flores, philosopher
and former economics minister under Allende, rather than with another
scientist, to develop your book 'Understanding the Computer and
Cognition'?
[Oh, what fun there will be in taking this book apart!  But it will
be a very fashionable book.]

A-  In fact, the opposite occurred: our collaboration came first, and the book
followed.  When Fernando Flores was released from prison in Chile, he came to
work at Stanford University.  We had a long series of dialogues in which we
discussed subjects analogous to the ones later developed in the book.  These
conversations were mutually enriching and gave birth to the book.

Q-  In what area did Fernando Flores stimulate you?

A- Being involved in political life, he was very conscious of the social
impact of language and of its profound tie with action, whereas, with my
background as a mathematician and computer scientist, I considered
language more as a formal system.  Thanks to his influence, I revised my
conception.
[Hilbert at the funeral of a student who committed suicide, said, "But
for this, he would have ...".  And then he stopped and said, "Well no.
He probably wouldn't have amounted to anything".]

GOVERNING BY COMPUTER-

Q-  Your recent study of "cognitive processes of language users" analyzed  the
narrow interrelations which exist between thought, language and action ...

A- Presently, it seems to me that we better understand these
interactions; nonetheless, there will not be a magic formula such as
E = mc^2 to be discovered in all this complexity.  Our research resembles
that of a historian or an archeologist minutely examining an
accumulation of information from which certain structures emerge.  If we
consider language as an action rather than a representation, we must
then reexamine the linguistic structures themselves before seeing the
correspondences which exist between them and the actions carried out
based on the language.  And, if we design computers capable of helping
people with their work, they ought to be structured according to the
conception of "language as action", which differs from the usual
treatment in computer science.

Q-  But our range of activities is limitless: we work, walk, laugh.  How can
you analyze such a quantity of possibilities?

A- We are particularly interested in the type of action brought about by
language.  I can create a promise, or even a whole new social
organization, through "words of honor", but I could never eat my sandwich
with a sentence.  Thus, within the extended range of human actions
there is a more restricted class of actions, the linguistic
actions, with which we work.
[What are they talking about?]

Q- This brings us to your analysis of the ultimate possibilities of
computer science: do you think we are inaugurating the "computer
revolution" whereby computers will seize the position of the "decision
maker"?
["computer revolution" doesn't ordinarily refer to anything like that.]

A- These are not technical problems, but rather political-economic
problems.  Their solution depends on the way in which the establishment
uses the computers.  A person can accept a decision generated by a
computer program, and he can even let the computer do all the work for
him, but in the end he will be responsible for all the decisions, since he
was the one who programmed the computer.  In such circumstances
computers cannot share the decision-making power.  Rather, the essential
question is whether people who accept decisions
generated by the robot would be ready to keep the decision-making power,
and if so, in which domains and for what goals?
[How about the person who hired the programmer?]

Q- Pushing this reasoning to the extreme, can computer science evolve
differently according to the political regime of the country?

A- Technology does not determine its usages: not every technique can be
used in just any fashion, and each apparatus only permits a certain range
of applications.  For example, computers can be very efficient in
promoting communication in a decentralized society: if I write a novel
and wish to distribute it, I must first publish it, which means that an
influential person in the publishing world must approve it.  If a
network of computerized communication existed, I could put my novel at
the disposition of anyone interested without any previous authorization.
In this sense, computerization would help decentralization, and thus help
reduce restrictive control.  But, simultaneously, computers can also be
used by a government censor to examine all information and thus
to prevent true freedom of writing.  The same device, the same computer,
could thus be used for very different, indeed contradictory,
purposes.
[The questioner asked about computer science].
*****

PB -  I think it's interesting to reflect on whether one would be likely
to find this level of discourse in a domestic newspaper.

TGD- How adaptive is the human organism?

   "Nonetheless, the computer will never be capable of handling all the
unforeseen situations, because it does not possess the spontaneous, adaptive
aptitude of the brain."

The brain often fails to adapt when stimuli arrive too quickly, or when
the stimuli are very unfamiliar.  Under stress, people often do precisely
the wrong thing.  Certainly, computers are presently much less adaptable
than people, but "never" is a very long time.

RJT - Has PB watched PBS much?

jmc - I don't find this a high level of discourse.  French intellectual
discussion is flashy but superficial and often not entirely unmeaningless,
and I think this is a prize example.

PB - Uhh, what do we mean by ``often not entirely unmeaningless''?
Is that phrase a case of itself?  Also, levels of discourse are relative.
I never said this one was particularly high by our (i.e. cs) standards.
Has RJT read newspapers much?

HPM - When I read the above translation yesterday it struck me as
bullshit.  I gave myself a one day cooling down period before
commenting, but my mind hasn't changed.  If this is representative
of the direction of Terry's thoughts in the fifteen years since
he did his thesis, no wonder so little has come of his work since.
Pompous (and WRONG!) philosophical pontifications are no substitute
for experimental results, which more often than not contradict
fatuous prejudgments like those voiced in the interview.  Merde du Chien!!

jmc - Now do you understand why inflation in Chile reached 500 percent
per year in 1973?

RJT - Not only that, I have a renewed appreciation for the relatively down-
to-earth bullshit that we get in the Stanford Ilyad and SF Chronic.   That
interview looked as if it had been translated by a computer, too.   Actually,
in slight disagreement with HPM, I would load part of the blame onto the
interviewER.   Most people who have ever been quoted or published in the
media have had good cause to be outraged at what was printed (or broadcast)
with their name on it.